Existing studies on class-incremental lifelong learning use only single-labeled data, which limits their applicability to multi-label data. This paper studies Lifelong Multi-Label (LML) classification, which builds an online class-incremental classifier over a sequential stream of multi-label classification data. In LML classification, training on partially labeled data can cause more serious catastrophic forgetting of old classes. To address this problem, the study proposes an Augmented Graph Convolutional Network (AGCN) with an Augmented Correlation Matrix (ACM) constructed across the sequential partial-label tasks. Results on two benchmarks show that the method is effective for classification and for mitigating forgetting.
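AGCN's propagation over a label-correlation matrix follows the standard graph-convolution recipe. The sketch below shows one such propagation step over a correlation matrix as a hypothetical illustration, not the paper's implementation; the actual construction of the ACM across tasks is more involved.

```python
import numpy as np

def gcn_layer(corr, h, w):
    """One graph-convolution step over a label-correlation matrix:
    row-normalize the correlations, propagate the label embeddings h,
    apply a linear map w, then a ReLU."""
    deg = corr.sum(axis=1, keepdims=True)
    corr_norm = corr / np.maximum(deg, 1e-12)  # row-normalized adjacency
    return np.maximum(corr_norm @ h @ w, 0.0)  # ReLU activation
```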
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
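Patch-based training, the workaround most respondents reported for oversized samples, amounts to sliding a window over each image. A minimal illustrative helper, not drawn from any specific challenge pipeline:

```python
def extract_patches(image, patch, stride):
    """Collect square patches from a 2D image (list of rows) by sliding
    a patch x patch window with the given stride; the usual workaround
    when full samples are too large to process at once."""
    h, w = len(image), len(image[0])
    patches = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            patches.append([row[j:j + patch] for row in image[i:i + patch]])
    return patches
```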
Time series forecasting has long been a hot topic of scientific research. With the development of artificial intelligence, new time series forecasting methods have achieved better prediction performance by drawing on bionic research and improving on past methods. The visibility graph (VG) algorithm is often used for time series forecasting in previous studies, but its forecasting accuracy falls short of deep learning methods such as artificial neural networks (ANN), convolutional neural networks (CNN), and long short-term memory networks (LSTM). The VG algorithm contains rich network information, but previous studies did not use this network information effectively for forecasting, resulting in relatively large prediction errors. To address this problem, this paper proposes the Deep Visibility Series (DVS) module through a bionic design of VG and an extension of past research, combining VG with bionic design and deep networks for the first time. By applying the bionic design of biological vision to VG, DVS achieves superior forecasting accuracy, contributing to time series forecasting. Moreover, this paper applies the DVS forecasting method to construction cost index forecasting, which has practical significance.
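The VG construction the paper builds on is standard: each time point becomes a node, and two points are linked when the straight line between them clears every intermediate sample. A minimal sketch of this natural visibility criterion (the DVS module itself stacks a deep network on top of this and is not reproduced here):

```python
def visibility_graph(series):
    """Natural visibility graph: nodes are time indices; i and j are
    connected if every sample strictly between them lies below the
    straight line joining (i, y_i) and (j, y_j)."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            # visibility criterion checked for all intermediate samples c
            if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges
```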
Click-through rate (CTR) prediction is one of the core tasks in commercial recommender systems. It aims to predict the probability that a user will click a specific item, given user and item features. Since feature interactions introduce nonlinearity, they are widely adopted to improve the performance of CTR prediction models, and effectively modeling feature interactions has attracted much attention in both research and industry. Current methods can generally be divided into three categories: (1) naive methods, which do not model feature interactions and use only raw features; (2) memorized methods, which memorize feature interactions by explicitly treating them as new features and assigning them trainable embeddings; and (3) factorized methods, which learn latent vectors for the raw features and implicitly model feature interactions through factorization functions. Studies have shown that modeling all feature interactions with just one of these methods is suboptimal, owing to the distinct characteristics of different feature interactions. To address this problem, we first propose a general framework called OptInter, which finds the most suitable modeling method for each feature interaction; various state-of-the-art deep CTR models can be viewed as instances of OptInter. To realize OptInter, we also introduce a learning algorithm that automatically searches for the optimal modeling method. We conduct extensive experiments on four large datasets. The experiments show that OptInter improves the best state-of-the-art baseline deep CTR models by up to 2.21%. Compared with the memorized method, which also outperforms the baselines, we reduce up to 91% of the parameters. In addition, we conduct several ablation studies to investigate the influence of different components of OptInter. Finally, we provide interpretable discussions of the results of OptInter.
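The memorized and factorized categories can be made concrete with a toy sketch: a factorized interaction is an inner product of latent vectors, while a memorized interaction is a direct lookup of a cross-feature's own embedding. This is illustrative only; OptInter's search over these modeling choices is not shown.

```python
import numpy as np

def factorized_interaction(v_i, v_j):
    """Factorized modeling: the interaction of two features is the inner
    product of their learned latent vectors (as in factorization machines)."""
    return float(np.dot(v_i, v_j))

def memorized_interaction(table, feat_i, feat_j):
    """Memorized modeling: the cross-feature (feat_i, feat_j) is treated
    as a new feature with its own trainable embedding, looked up directly."""
    return table[(feat_i, feat_j)]
```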
Time series have attracted much attention in many fields today. Time series forecasting algorithms based on complex network analysis are a research hotspot, and how to use the information in a time series to achieve more accurate forecasting remains an open problem. To address it, this paper proposes a weighted network forecasting method to improve forecasting accuracy. First, the time series is converted into a complex network, and the similarity between nodes is computed. The similarity is then used as a weight to form a weighted combination of the candidate predictions produced by different nodes. Compared with previous methods, the proposed method is more accurate. To verify its effectiveness, experiments are conducted on the M1 and M3 datasets and a Construction Cost Index (CCI) dataset, showing that the proposed method achieves more accurate forecasting performance.
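The combination step described above reduces to a similarity-weighted average; a minimal sketch, assuming each node has already contributed a candidate prediction:

```python
import numpy as np

def weighted_forecast(candidates, similarities):
    """Combine candidate predictions produced by different network nodes,
    weighting each by that node's similarity to the current node."""
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()                      # normalize weights to sum to 1
    return float(np.dot(w, candidates))
```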
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or can only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over MIM pre-training from scratch on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
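The first finding, that distilling token relations beats feature distillation, can be sketched as matching the two models' token-token similarity distributions with a KL loss. This is an illustrative stand-in; TinyMIM distills Q-K and V-V attention relations, and the repository has the exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def relation_distill_loss(student_tokens, teacher_tokens, tau=1.0):
    """Token-relation distillation sketch: build each model's softened
    token-token similarity matrix and penalize the KL divergence of the
    student's relations from the teacher's."""
    def relations(tokens):                   # tokens: (num_tokens, dim)
        t = np.asarray(tokens, dtype=float)
        return softmax(t @ t.T / tau)        # row-wise relation distribution
    p = relations(teacher_tokens)            # teacher relation rows
    q = relations(student_tokens)            # student relation rows
    return float(np.sum(p * (np.log(p) - np.log(q))) / len(p))
```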
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
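The style-aware adaptation can be pictured as the style code producing per-channel modulation of a feed-forward layer's weights. The sketch below is a hypothetical simplification: the names and shapes are ours, and the paper's style-aware adaptive transformer is more elaborate.

```python
import numpy as np

def style_adaptive_ffn(x, w, style_scale, style_shift):
    """A feed-forward layer whose weights are modulated by a style code:
    style_scale / style_shift are assumed to be projections of the
    encoded style code onto per-output-channel scale and shift terms."""
    w_mod = w * style_scale[None, :]   # style-conditioned weight columns
    return x @ w_mod + style_shift
```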
Decompilation aims to transform a low-level programming language (LPL) (e.g., binary code) into its functionally equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing ideas from NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and to improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
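The GRN layer is described as a three-step unit: global feature aggregation, divisive normalization, and feature calibration with a residual connection. A minimal NumPy sketch for channels-last inputs, with the learnable gamma and beta reduced to scalars for brevity:

```python
import numpy as np

def grn(x, gamma=1.0, beta=0.0, eps=1e-6):
    """Global Response Normalization: the per-channel spatial L2 norm,
    divided by its mean over channels, recalibrates the features.
    x has shape (N, H, W, C), channels last."""
    gx = np.linalg.norm(x, axis=(1, 2), keepdims=True)  # (N, 1, 1, C)
    nx = gx / (gx.mean(axis=-1, keepdims=True) + eps)   # divisive normalization
    return gamma * (x * nx) + beta + x                  # calibrate + residual
```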